@techreport{brofos_magnetic_2020,
  title = {Magnetic Manifold Hamiltonian Monte Carlo},
  author = {James A Brofos and Roy R Lederman},
  url = {http://arxiv.org/abs/2010.07753},
  year = {2020},
  date = {2020-10-01},
  urldate = {2020-11-25},
  abstract = {Markov chain Monte Carlo (MCMC) algorithms offer various strategies for sampling; the Hamiltonian Monte Carlo (HMC) family of samplers are MCMC algorithms which often exhibit improved mixing properties. The recently introduced magnetic HMC, a generalization of HMC motivated by the physics of particles influenced by magnetic field forces, has been demonstrated to improve the performance of HMC. In many applications, one wishes to sample from a distribution restricted to a constrained set, often manifested as an embedded manifold (for example, the surface of a sphere). We introduce magnetic manifold HMC, an HMC algorithm on embedded manifolds motivated by the physics of particles constrained to a manifold and moving under magnetic field forces. We discuss the theoretical properties of magnetic Hamiltonian dynamics on manifolds, and introduce a reversible and symplectic integrator for the HMC updates. We demonstrate that magnetic manifold HMC produces favorable sampling behaviors relative to the canonical variant of manifold-constrained HMC.},
  note = {arXiv: 2010.07753},
  keywords = {Algorithms, Computer Science - Machine Learning, HMC, Manifolds, MCMC, Statistics - Machine Learning},
  pubstate = {published},
  tppubtype = {techreport}
}

@techreport{katz_spectral_2020,
  title = {Spectral Flow on the Manifold of SPD Matrices for Multimodal Data Processing},
  author = {Ori Katz and Roy R Lederman and Ronen Talmon},
  url = {http://arxiv.org/abs/2009.08062},
  year = {2020},
  date = {2020-09-01},
  urldate = {2020-11-25},
  abstract = {In this paper, we consider data acquired by multimodal sensors capturing complementary aspects and features of a measured phenomenon. We focus on a scenario in which the measurements share mutual sources of variability but might also be contaminated by other measurement-specific sources such as interferences or noise. Our approach combines manifold learning, which is a class of nonlinear data-driven dimension reduction methods, with the well-known Riemannian geometry of symmetric and positive-definite (SPD) matrices. Manifold learning typically includes the spectral analysis of a kernel built from the measurements. Here, we take a different approach, utilizing the Riemannian geometry of the kernels. In particular, we study the way the spectrum of the kernels changes along geodesic paths on the manifold of SPD matrices. We show that this change enables us, in a purely unsupervised manner, to derive a compact, yet informative, description of the relations between the measurements, in terms of their underlying components. Based on this result, we present new algorithms for extracting the common latent components and for identifying common and measurement-specific components.},
  note = {arXiv: 2009.08062},
  keywords = {Common variable, Computer Science - Machine Learning, Manifold Learning, Multi-view, multimodal, SPD Matrices, Statistics - Machine Learning},
  pubstate = {published},
  tppubtype = {techreport}
}

@techreport{brofos_non-canonical_2020,
  title = {Non-Canonical Hamiltonian Monte Carlo},
  author = {James A Brofos and Roy R Lederman},
  url = {http://arxiv.org/abs/2008.08191},
  year = {2020},
  date = {2020-01-01},
  urldate = {2020-11-25},
  abstract = {Hamiltonian Monte Carlo is typically based on the assumption of an underlying canonical symplectic structure. Numerical integrators designed for the canonical structure are incompatible with motion generated by non-canonical dynamics. These non-canonical dynamics, motivated by examples in physics and symplectic geometry, correspond to techniques such as preconditioning which are routinely used to improve algorithmic performance. Indeed, recently, a special case of non-canonical structure, magnetic Hamiltonian Monte Carlo, was demonstrated to provide advantageous sampling properties. We present a framework for Hamiltonian Monte Carlo using non-canonical symplectic structures. Our experimental results demonstrate sampling advantages associated with Hamiltonian Monte Carlo with non-canonical structure. To summarize our contributions: (i) we develop non-canonical HMC from foundations in symplectic geometry; (ii) we construct an HMC procedure using implicit integration that satisfies detailed balance; (iii) we propose to accelerate the sampling using an \emph{approximate explicit} methodology; (iv) we study two novel, randomly-generated non-canonical structures: magnetic momentum and the coupled magnet structure, with implicit and explicit integration.},
  note = {arXiv: 2008.08191},
  keywords = {Algorithms, Computer Science - Machine Learning, HMC, MCMC, Statistics - Machine Learning},
  pubstate = {published},
  tppubtype = {techreport}
}