If you are interested in working with us, here are some additional projects we would be happy to work on!
- Understanding the representations learned by contrastive learning
Self-supervised methods based on contrastive learning perform well by learning representations that are useful for downstream tasks. Some recent works justify the quality of these representations through the recovery of latent generative models (https://arxiv.org/abs/2102.08850), multi-view hypotheses on the data (https://arxiv.org/abs/2006.05576), or empirical properties such as alignment and uniformity (https://arxiv.org/abs/2005.10242); a minimal sketch of the last two metrics follows below. In this project, we seek to experimentally study the types of representations learned with contrastive learning and to check the validity of the proposed mechanisms. Depending on the outcome of these investigations, theoretical or empirical follow-up work motivated by the different views mentioned above is possible (e.g., a study of novel target spaces for contrastive/self-supervised learning).
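As a concrete starting point, the alignment and uniformity properties of Wang & Isola (https://arxiv.org/abs/2005.10242) can be measured directly on learned embeddings. Below is a minimal PyTorch sketch of these two metrics, following the definitions in that paper; the encoder and dataset are left abstract, and the random tensors at the end are only stand-in embeddings for illustration.

```python
import torch
import torch.nn.functional as F

def alignment(x, y, alpha=2):
    """Alignment: mean distance between embeddings of positive pairs.

    x, y: L2-normalized embeddings of shape (N, D), where (x[i], y[i])
    is a positive pair (e.g., two augmentations of the same image).
    Lower is better: positives should map close together.
    """
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniformity(x, t=2):
    """Uniformity: log of the mean Gaussian potential over all pairs.

    Lower is better: embeddings should spread uniformly over the
    unit hypersphere rather than collapse to a few directions.
    """
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

# Stand-in embeddings; in practice these would come from a trained encoder.
x = F.normalize(torch.randn(128, 64), dim=1)
y = F.normalize(x + 0.1 * torch.randn(128, 64), dim=1)  # noisy "positives"
print(alignment(x, y).item(), uniformity(x).item())
```

Tracking these two quantities over training, and comparing them across contrastive objectives, is one simple way to probe which of the proposed mechanisms a given method actually realizes.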
For more information, please contact Oguz.