Source: Deep Learning on Medium
Adaloss: Adaptive Loss Function for Landmark Localization
Using a heatmap → is good since it gives a lot of gradients → but it is not accurate → hence they are combining two loss functions.
The loss (landmark) area → is quite small → some loss functions can handle such sparse targets, but most do not. (there must be a better way to handle the loss function in this setting).
And we can see that adaptive loss functions can take care of that → as training progresses → the landmark region becomes more accurate and precise.
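The idea of a landmark region that sharpens over time can be pictured with Gaussian target heatmaps. Below is a minimal sketch (names and sigma values are my own, not from the paper): a large sigma gives a broad blob with dense gradients early on, while a small sigma gives a sharp, precise peak later.

```python
import numpy as np

def gaussian_heatmap(height, width, cx, cy, sigma):
    """Render a 2D Gaussian target heatmap centered on landmark (cx, cy)."""
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

# Broad target: easy to learn, lots of non-zero gradient pixels.
coarse = gaussian_heatmap(64, 64, 32, 32, sigma=8.0)
# Sharp target: precise localization, but very sparse supervision.
fine = gaussian_heatmap(64, 64, 32, 32, sigma=1.5)
```

Shrinking sigma during training moves the supervision signal from the `coarse` regime toward the `fine` regime.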
Training deep learning models with highly sparse landmarks is very hard. (so the authors → designed a new loss function). (another approach is to use hourglass networks with skip connections).
The approach is curriculum-like → first solve a simple problem → then move on to more complicated ones.
A constant decay schedule wouldn't be a good idea. (since the right schedule is model dependent).
So if there is no change in the loss → they use the recent loss values to calculate its variance. (adapting based on how the loss actually behaves → is a better idea than a fixed schedule).
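The loss-variance idea can be sketched as a small scheduler: watch a window of recent losses, and when their variance is low (the loss has stopped changing), shrink the target sigma. This is a hypothetical sketch, not the paper's exact update rule; the class name, window size, threshold, and decay factor are all my assumptions.

```python
from collections import deque
import statistics

class SigmaScheduler:
    """Hypothetical sketch: shrink the target-heatmap sigma when the
    recent loss history has stabilized (low variance), i.e. when the
    network has mastered the current level of difficulty."""

    def __init__(self, sigma=8.0, window=10, var_threshold=1e-4,
                 decay=0.9, sigma_min=1.0):
        self.sigma = sigma
        self.losses = deque(maxlen=window)   # rolling loss history
        self.var_threshold = var_threshold
        self.decay = decay
        self.sigma_min = sigma_min

    def step(self, loss):
        self.losses.append(loss)
        if len(self.losses) == self.losses.maxlen:
            # Loss has plateaued -> sharpen the target and reset the window.
            if statistics.pvariance(self.losses) < self.var_threshold:
                self.sigma = max(self.sigma * self.decay, self.sigma_min)
                self.losses.clear()
        return self.sigma

# With a constant (plateaued) loss, sigma shrinks after one full window.
sched = SigmaScheduler()
for _ in range(10):
    current_sigma = sched.step(0.5)
```

The `sigma_min` floor mirrors the observation later in the notes that sigma stops decreasing once training converges.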
A 9-layer UNet was used → so the network architecture is not new → only the loss function is.
With Adaloss → they are able to train at a higher learning rate → and the result is more stable. (these results were on a private dataset).
As the network trains → the landmark becomes more precise and accurate.
The sigma value decreases → and when training converges → it stops decreasing. (and for different body parts → sigma decreases at different rates).
They achieved state-of-the-art results in NME (normalized mean error).
They also tried this method on a medical dataset → for surgery.
Surprisingly, plain MSE error is still used for landmark detection. (adaptive loss is a good idea).
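To see why plain MSE is a weak baseline here, consider a heatmap target with a single hot pixel. A minimal sketch (my own toy example, not from the paper): an all-zero prediction already scores a near-zero MSE, so the gradient signal from the sparse landmark is tiny.

```python
import numpy as np

def mse_heatmap_loss(pred, target):
    """Per-pixel MSE between predicted and target heatmaps,
    the common baseline loss for heatmap-based landmark detection."""
    return np.mean((pred - target) ** 2)

target = np.zeros((64, 64))
target[32, 32] = 1.0           # one sparse landmark pixel
pred = np.zeros((64, 64))      # a trivial all-zero prediction...
loss = mse_heatmap_loss(pred, target)
# ...already yields a tiny loss (1/4096), illustrating the weak
# gradients MSE gives on sparse targets.
```

Broadening the target Gaussian (larger sigma) is exactly what makes this loss informative early in training.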