The counting loss counts the number of 'wrong' predictions a model makes. It is meant for classifiers in supervised learning, where the model's predicted class for a given input should exactly match the target output.
\( L \) | This is the symbol for a loss function. It is a function that quantifies how wrong a model's prediction is compared to the target output. |
\( h \) | This symbol denotes a model in machine learning. |
\( y \) | This symbol stands for the ground truth of a sample. In supervised learning this is often paired with the corresponding input. |
\( u \) | This symbol denotes the input of a model. |
The counting loss is a loss function, so it takes the form:
\[\htmlClass{sdt-0000000072}{L} : \htmlClass{sdt-0000000045}{\mathbb{R}}^{\htmlClass{sdt-0000000117}{n}} \times \htmlClass{sdt-0000000045}{\mathbb{R}}^{\htmlClass{sdt-0000000117}{n}} \rightarrow \htmlClass{sdt-0000000045}{\mathbb{R}}_{\geq 0}\]
The counting loss revolves around the idea that the output of the model should be the same as the ground truth. The loss therefore increases whenever they do not match.
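This idea can be sketched in a few lines of Python. The function name and the list-based representation of predictions and targets below are illustrative assumptions, not part of the original definition:

```python
def counting_loss(predictions, targets):
    """Count how many predictions differ from their targets.

    This is the 'counting' (0-1) loss sketched above: each mismatch
    between a prediction and its ground truth adds 1 to the loss.
    """
    return sum(1 for p, t in zip(predictions, targets) if p != t)

# Two of the three predictions below differ from their targets.
print(counting_loss([1, 0, 2], [1, 1, 0]))  # 2
```

Because each sample contributes either 0 or 1, the result is always a non-negative integer, matching the codomain \(\mathbb{R}_{\geq 0}\) in the signature above.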
The symbol \(y\) represents the ground truth of a sample in machine learning. Samples come in pairs of the input and the ground truth, or "target output".
The symbol for a model is \(h\). It represents a machine learning model that takes an input and gives an output.
The symbol \(u\) represents the input of a model.
Let \( \htmlClass{sdt-0000000037}{y} \) be some ground truth corresponding to an input \( \htmlClass{sdt-0000000103}{u} \), where \(\htmlClass{sdt-0000000103}{u} = 1.4\) and \(\htmlClass{sdt-0000000037}{y} = 10\).
Now some model \( \htmlClass{sdt-0000000084}{h} \) takes the input and gives some prediction \(\htmlClass{sdt-0000000084}{h}(1.4) = 11\).
We can easily see that \(10 \ne 11\) and therefore \(\htmlClass{sdt-0000000084}{h}(\htmlClass{sdt-0000000103}{u}) \ne \htmlClass{sdt-0000000037}{y}\).
The prediction counts as wrong, so we can conclude that the counting loss for this sample is 1.
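The worked example can be checked directly. The variable names below are illustrative; the values are the ones from the example:

```python
y = 10           # ground truth for the input u = 1.4
prediction = 11  # the model's output h(1.4)

# For a single sample, the counting loss is 1 if the prediction
# differs from the ground truth, and 0 otherwise.
loss = 1 if prediction != y else 0
print(loss)  # 1
```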