Fairness and Robustness with Missing Information

The reliability of machine learning systems critically assumes that the associations between features and labels remain similar between training and test distributions. However, unmeasured variables, such as confounders, break this assumption: useful correlations between features and labels at training time can become useless or even harmful at test time. For example, high obesity is generally predictive of heart disease, but this relation may not hold for smokers, who generally have lower rates of obesity and higher rates of heart disease.

Although machine learning researchers have introduced a plethora of useful constructions for learning over Euclidean space, numerous types of data in various applications benefit from, if not necessitate, a non-Euclidean treatment. For example, consider representing the dynamics of segments in time series data by their covariance matrices, which lie on the manifold of symmetric positive definite (SPD) matrices. In contexts where data points lie on non-trivial Riemannian manifolds, one must devise methods to properly learn over such data while respecting manifold structure. To this end, I have written the ICML 2020 paper "Differentiating through the Fréchet Mean," and am in the process of writing a new paper, "Riemannian Residual Neural Networks." I will present both of these papers in light of the aforementioned motivation.

Aaron Lou, Isay Katsman, Qingxuan Jiang, Serge Belongie, Ser-Nam Lim, and Christopher De Sa. Differentiating through the Fréchet mean. In International Conference on Machine Learning, 2020.
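To make the SPD example concrete, here is a minimal sketch (not code from the paper) of a Fréchet mean of SPD matrices. Under the log-Euclidean metric it has a closed form: exponentiate the average of the matrix logarithms. Under other metrics the Fréchet mean generally requires an iterative solver, which is what motivates differentiating through it. The helper names and the two example matrices below are illustrative, not from the source.

```python
import numpy as np

def spd_log(M):
    """Matrix logarithm of an SPD matrix via eigendecomposition
    (real-valued, since SPD matrices have positive eigenvalues)."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(w)) @ V.T

def log_euclidean_mean(mats):
    """Fréchet mean of SPD matrices under the log-Euclidean metric:
    the exp of the arithmetic mean of the matrix logs."""
    return spd_exp(np.mean([spd_log(M) for M in mats], axis=0))

# Two covariance matrices from hypothetical time-series segments.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, -0.3], [-0.3, 3.0]])
mean = log_euclidean_mean([A, B])
# The result stays on the SPD manifold: symmetric with positive eigenvalues.
```

Note that a plain entrywise average of SPD matrices would also be SPD here, but it ignores the manifold geometry; the log-Euclidean mean respects it while remaining cheap to compute.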