*August 2019*

tl;dr: Self-paced learning based on homoscedastic uncertainty.

The paper spends considerable mathematical effort deriving its multi-task loss formulation from the idea of maximizing the Gaussian likelihood with homoscedastic uncertainty. Once derived, however, the formulation is extremely straightforward to implement.

| Method | Learning Progress Signal | Hyperparameters |
| --- | --- | --- |
| Uncertainty Weighting | Homoscedastic uncertainty | No hyperparameters |
| GradNorm | Training loss ratio | 1 exponential weighting factor |
| Dynamic Task Prioritization | KPI | 1 focal loss scaling factor |

- Uncertainties (for details refer to uncertainty in Bayesian DL)
	- Epistemic uncertainty: model uncertainty
	- Aleatoric uncertainty: data uncertainty
		- Data-dependent (heteroscedastic) uncertainty
		- Task-dependent (homoscedastic) uncertainty: does not depend on input data. It stays constant across all data but varies between tasks.

- Modify each loss by an uncertainty factor $\sigma$: $L \rightarrow \frac{1}{\sigma^2}L + \log\sigma$. This formulation can be easily generalized to almost any loss function. $\sigma$ is a task-specific parameter that can be learned and dynamically updated throughout training.
- Instance segmentation is done in a way very similar to CenterNet. Each

- Regress $\log \sigma^2$ instead of $\sigma^2$ directly. This exponential mapping lets the network regress an unbounded scalar and is more numerically stable.
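The two bullets above can be combined into a short sketch. Assuming $s_i = \log \sigma_i^2$ and the weighting form $\frac{1}{\sigma^2}L + \log\sigma = e^{-s}L + \frac{s}{2}$ as written here (in practice each $s_i$ would be a learnable parameter, e.g. an `nn.Parameter` in PyTorch, updated by the optimizer alongside the network weights):

```python
import math

def combined_loss(task_losses, log_sigma2):
    """Multi-task loss weighted by learned homoscedastic uncertainty.

    task_losses: per-task loss values L_i
    log_sigma2:  per-task s_i = log(sigma_i^2), regressed instead of
                 sigma^2 for an unbounded, numerically stable parameterization

    Total = sum_i exp(-s_i) * L_i + s_i / 2
    (exp(-s) = 1/sigma^2 and s/2 = log sigma)
    """
    total = 0.0
    for L, s in zip(task_losses, log_sigma2):
        total += math.exp(-s) * L + 0.5 * s
    return total
```

With $s_i = 0$ (i.e. $\sigma_i = 1$) this reduces to a plain sum of the task losses; as a task's uncertainty grows, its loss is down-weighted while the $\frac{s}{2}$ term penalizes unbounded growth of $\sigma$.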

- The OPTICS clustering algorithm (Ordering Points To Identify the Clustering Structure) is similar to DBSCAN, but less sensitive to parameter settings. See the tutorial here and the Coursera video.
- TF implementation, PyTorch implementation, and Keras implementation.
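Beyond the framework-specific implementations linked above, scikit-learn ships an OPTICS implementation. A minimal sketch on hypothetical toy data (the blob coordinates and parameter values below are illustrative assumptions, not from the paper):

```python
import numpy as np
from sklearn.cluster import OPTICS

# Two well-separated toy blobs (hypothetical data for illustration).
rng = np.random.RandomState(0)
blob_a = rng.randn(20, 2) * 0.3          # points around (0, 0)
blob_b = rng.randn(20, 2) * 0.3 + 10.0   # points around (10, 10)
X = np.vstack([blob_a, blob_b])

# OPTICS first builds a reachability ordering over all points; here we
# extract DBSCAN-like flat clusters at eps=2.0 from that ordering.
clust = OPTICS(min_samples=5, cluster_method="dbscan", eps=2.0).fit(X)
labels = clust.labels_  # cluster id per point, -1 would mean noise
```

Unlike plain DBSCAN, the reachability ordering is computed once and flat clusterings at different density thresholds can be extracted from it without re-running the algorithm.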