Public Personal Notebook
Kaggle has two strong assumptions:
1. The amount of data you have is fixed. However, see: https://anand.typepad.com/datawocky/2008/03/more-data-usual.html
2. Processing time is not important. However, see: https://www.wired.com/2012/04/netflix-prize-costs/
Glad to appear in my friend Rand Xie’s talk! See: 22:29
Thanks for mentioning me. See 17:10
https://ai.googleblog.com/2019/04/evaluating-unsupervised-learning-of.html
The choice of random seed across different runs has a larger impact on disentanglement scores than the choice of model or the strength of regularization (even though one might naively expect more regularization to always yield more disentanglement). A good run with a bad hyperparameter can easily beat a bad run with a good hyperparameter.
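One way to sanity-check this claim on your own experiments: group run scores by hyperparameter setting and by seed, and compare the score spread within each grouping. A minimal sketch with hypothetical disentanglement scores (the numbers and the regularization values below are made up for illustration):

```python
import statistics

# Hypothetical disentanglement scores, keyed by (regularization strength, seed).
# Constructed so that varying the seed moves the score far more than varying
# the regularization strength, mirroring the finding quoted above.
scores = {
    (0.1, 0): 0.42, (0.1, 1): 0.61, (0.1, 2): 0.35,
    (1.0, 0): 0.44, (1.0, 1): 0.58, (1.0, 2): 0.39,
    (10.0, 0): 0.46, (10.0, 1): 0.55, (10.0, 2): 0.41,
}

def spread(group_key):
    """Mean (max - min) score range within groups formed by the given key.

    group_key=0 groups by regularization (so the range is across seeds);
    group_key=1 groups by seed (so the range is across regularization).
    """
    groups = {}
    for key, s in scores.items():
        groups.setdefault(key[group_key], []).append(s)
    return statistics.mean(max(v) - min(v) for v in groups.values())

seed_spread = spread(0)  # fix the hyperparameter, vary the seed
reg_spread = spread(1)   # fix the seed, vary the hyperparameter

print(f"spread across seeds:          {seed_spread:.3f}")
print(f"spread across regularization: {reg_spread:.3f}")
# With these numbers, the seed-induced spread is several times larger.
```

If the seed-induced spread dominates in your own sweeps, single-run comparisons between hyperparameter settings are not trustworthy; averaging over multiple seeds per setting is the obvious fix.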