Description
Disentangling latent spaces is an important research direction in the interpretability of unsupervised machine learning. Several recent deep learning works are effective at producing disentangled representations. However, in the unsupervised setting, there is no way to pre-specify which part of the latent space captures specific factors of variation. While this is generally a hard problem because analytical expressions do not exist for most of these variations, certain factors, such as geometric transforms, can be expressed analytically. Furthermore, in existing frameworks, the disentangled values are not themselves interpretable. The focus of this work is to disentangle these geometric factors of variation (which turn out to be nuisance factors for many applications) from the semantic content of the signal in an interpretable manner, which in turn makes the features more discriminative. Experiments demonstrate the modularity of the approach with respect to other disentangling strategies, as well as its efficacy on multiple one-dimensional (1D) and two-dimensional (2D) datasets.
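To make the abstract's central idea concrete: one common way to factor an analytically expressible geometric transform out of a latent code is a spatial-transformer-style encoder that predicts both a semantic content vector and the parameters of an affine warp. The PyTorch sketch below is purely illustrative and is not the architecture described in the thesis; the class name, layer sizes, input resolution (28x28), and the affine parameterization are all assumptions made for the sake of a runnable example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometricDisentangler(nn.Module):
    """Illustrative sketch (not the thesis architecture): split the
    latent code into a semantic content vector and an analytically
    parameterized geometric transform (here, a 2D affine warp)."""

    def __init__(self, content_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 64 * 7 * 7  # assumes 28x28 single-channel inputs
        self.content_head = nn.Linear(feat_dim, content_dim)  # semantic factors
        self.theta_head = nn.Linear(feat_dim, 6)              # affine parameters
        # initialize the transform head to the identity affine map
        self.theta_head.weight.data.zero_()
        self.theta_head.bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        h = self.encoder(x)
        content = self.content_head(h)             # deformation-invariant part
        theta = self.theta_head(h).view(-1, 2, 3)  # interpretable geometric part
        # undo the predicted deformation with a differentiable warp
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        aligned = F.grid_sample(x, grid, align_corners=False)
        return content, theta, aligned

Because the warp parameters theta have a closed-form geometric meaning (rotation, scale, translation, shear), that part of the representation is interpretable by construction, while the content vector can be trained, e.g. with reconstruction or consistency losses (not shown), to be invariant to the deformation.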

Details

Title
  • Structured disentangling networks for learning deformation invariant latent spaces
Contributors
  • Kaushik Koneripalli Seetharam
Date Created
  • 2019
Resource Type
  • Text

Note
  • thesis
    Partial requirement for: M.S., Arizona State University, 2019
  • bibliography
    Includes bibliographical references
  • Field of study: Electrical engineering

Citation and reuse

Statement of Responsibility

by Kaushik Koneripalli Seetharam
