Plan for the October 6th, 2016 Columbia Advanced Machine Learning seminar series

Outline

  • Generative Adversarial Networks [1]

    • Introduction: the goal is to learn a distribution and be able to sample from it.
    • Previous attempts (e.g. restricted Boltzmann machines)
    • Relation to noise contrastive estimation [7]
    • Algorithm
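The algorithm bullet above can be sketched as a toy training loop. This is an illustrative assumption, not the seminar's own code: a 1-D generator G(z) = a·z + b and a logistic discriminator D(x) = σ(w·x + c), trained by alternating gradient steps on the GAN value function E[log D(x)] + E[log(1 − D(G(z)))], with the non-saturating loss for the generator.

```python
import numpy as np

# Minimal 1-D GAN sketch (hypothetical setup: real data ~ N(4, 0.5),
# linear generator, logistic discriminator; hyperparameters untuned).
rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0          # generator parameters: G(z) = a*z + b
w, c = 0.0, 0.0          # discriminator parameters: D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 128

for _ in range(3000):
    x = rng.normal(4.0, 0.5, batch)   # real samples
    z = rng.normal(0.0, 1.0, batch)   # noise
    g = a * z + b                     # fake samples

    # Discriminator: gradient ascent on E[log D(x)] + E[log(1 - D(G(z)))]
    dr, df = sigmoid(w * x + c), sigmoid(w * g + c)
    w += lr * (np.mean((1 - dr) * x) - np.mean(df * g))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator: gradient ascent on E[log D(G(z))] (non-saturating loss)
    df = sigmoid(w * g + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

print(b)  # the generator mean drifts toward the real mean
```

Even in this degenerate setting the alternating-update structure of the algorithm is visible: the discriminator pushes its decision boundary between real and fake, and the generator follows the discriminator's gradient toward the real data.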
  • f-divergences as a more general framework [2]

    • What is an f-divergence [9]
    • Fenchel conjugate [10], derivation of a lower bound [11,12]
    • Demystification of GANs
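The lower bound referred to above is the standard variational bound obtained from the Fenchel conjugate [10,11,12]: for a convex f with f(1) = 0,

```latex
D_f(P \,\|\, Q) \;=\; \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx
\;\ge\; \sup_{T \in \mathcal{T}}
\Big( \mathbb{E}_{x \sim P}\big[T(x)\big]
   - \mathbb{E}_{x \sim Q}\big[f^{*}(T(x))\big] \Big),
\qquad
f^{*}(t) \;=\; \sup_{u \in \mathrm{dom}\, f} \big\{ u t - f(u) \big\}.
```

Maximizing the right-hand side over a class of functions T (e.g. neural networks) gives a tractable objective; the GAN criterion is recovered for a particular choice of f, which is the "demystification" point.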
  • Alternative methods: Maximum Mean Discrepancy (MMD) optimization [6,7], based on the kernel two-sample test of [13]
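The MMD statistic behind the kernel two-sample test can be sketched directly. The following is a minimal unbiased estimator of squared MMD with an RBF kernel; the bandwidth choice is an assumption for illustration, not a recommendation:

```python
import numpy as np

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased estimate of squared MMD between 1-D samples x and y,
    using an RBF kernel k(a, b) = exp(-(a - b)^2 / (2 * bandwidth^2))."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2
                            / (2.0 * bandwidth ** 2))
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    m, n = len(x), len(y)
    # Drop the diagonal of the within-sample terms for unbiasedness.
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200)
y = rng.normal(0.0, 1.0, 200)  # same distribution as x
z = rng.normal(3.0, 1.0, 200)  # shifted distribution
print(mmd2_unbiased(x, y))     # near zero: same distribution
print(mmd2_unbiased(x, z))     # clearly positive: distributions differ
```

In the generative-model setting, this statistic replaces the learned discriminator: the generator is trained to drive the MMD between generated and real samples toward zero.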

  • Further discussion [3,4,5]

References

Main papers:

Additional papers

Related ideas from the 'oldies'

Upcoming NIPS workshop

