In broad strokes, we describe here some of the main topics we are working on and are interested in developing. All the topics are related; indeed, our approach is one of open collaboration and non-compartmentalization, so as to maximize the benefit of the multidisciplinary expertise gathered around the lab.
Machine Learning Theory
We aim to understand the foundational principles underlying machine learning (ML).
- We use techniques from statistical physics and applied probability to study the generalization capabilities of models of deep neural networks (DNNs). The geometrical structure of the minima of the loss function, and their accessibility, are the main questions under investigation. These studies are based on the notion of High Local Entropy regions, or Wide Flat Minima, in the weight space.
- We are also interested in extending these notions to other forms of learning, both supervised and unsupervised.
- We plan to study learning models based on the combinatorial nature of the features of the data.
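The wide-flat-minima picture in the first point above can be made concrete with a toy one-dimensional loss (the function, widths and noise scale below are illustrative assumptions, not models from our work): the average loss increase under random weight perturbations is much larger around a sharp minimum than around a wide one.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(w):
    # Toy 1-D loss: a deep but sharp minimum at w = +2, a slightly
    # shallower but wide minimum at w = -2.
    return -np.exp(-(w - 2)**2 / 0.05) - 0.9 * np.exp(-(w + 2)**2 / 2.0)

def flatness(w, sigma=0.3, n=10_000):
    # Average loss increase under Gaussian weight perturbations: a simple
    # proxy for local-entropy / wide-flat-minima measures.
    noise = sigma * rng.standard_normal(n)
    return float(np.mean(f(w + noise) - f(w)))

print(flatness(2.0), flatness(-2.0))  # the sharp minimum degrades far more
```

Minima surrounded by many low-loss configurations (high local entropy) are exactly those that are robust to such perturbations.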
We study algorithms by analyzing their out-of-equilibrium behavior and their ability to find minima with good generalization properties. We are also developing various classes of algorithms based on the local-entropy loss, ranging from MCMC and message-passing methods to SGD variants.
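As a sketch of what a local-entropy-based algorithm can look like, here is a minimal Entropy-SGD-style loop on a toy 1-D landscape (the loss, step sizes and temperature are all illustrative assumptions): an inner Langevin loop samples the Gibbs measure of the loss plus a quadratic coupling to the current weight, and the outer update moves the weight toward the sampled mean, biasing the dynamics toward wide, high-local-entropy minima.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(w):
    # Toy 1-D loss: sharp minimum near w = +2, wide one near w = -2.
    return -np.exp(-(w - 2)**2 / 0.05) - 0.9 * np.exp(-(w + 2)**2 / 2.0)

def grad_f(w):
    return (2 * (w - 2) / 0.05) * np.exp(-(w - 2)**2 / 0.05) \
         + 0.9 * (w + 2) * np.exp(-(w + 2)**2 / 2.0)

def entropy_sgd_step(w, gamma=1.0, eta=0.1, inner_steps=50,
                     inner_lr=0.05, temp=0.01):
    # Inner Langevin loop: sample w' ~ exp(-(f(w') + gamma/2 (w'-w)^2)/temp)
    wp, mean = w, 0.0
    for t in range(inner_steps):
        g = grad_f(wp) + gamma * (wp - w)
        wp += -inner_lr * g + np.sqrt(2 * inner_lr * temp) * rng.standard_normal()
        mean += (wp - mean) / (t + 1)  # running average of the samples
    # Outer step follows the local-entropy gradient: gamma * (w - <w'>)
    return w - eta * gamma * (w - mean)

w = 0.0
for _ in range(200):
    w = entropy_sgd_step(w)
print(round(w, 2))  # drifts toward the wide minimum near w = -2
```

The quadratic coupling strength `gamma` controls the scale at which flatness is probed; annealing it during training is a common refinement.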
We are interested in the study of learning and discovery processes in models that are physically constrained (e.g. by precision limitations, energy budgets, memory capacity, or deliberation time), with the aim of reaching a more fundamental understanding of stochastic learning processes, which are of interest both in the neurosciences and for novel hardware technologies.
Games and Decisions
A game is an interactive decision situation whose final outcome depends on the choices of all Active Agents and on the realized state of the Environment. A decision-making problem is a game with one Active Agent and the Environment. We study these situations from both a prescriptive and a descriptive perspective, with the aim of improving the rational decision making of both humans and machines and of representing natural decision making.
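In the single-agent case, the prescriptive view reduces to expected-utility maximization over the Environment's possible states; a toy numerical illustration (the probabilities and utilities are invented for the example):

```python
import numpy as np

# One Active Agent, three possible Environment states with known
# probabilities; each row gives a hypothetical action's utility per state.
p_state = np.array([0.5, 0.3, 0.2])
utility = np.array([[10, 0, -6],    # action 0: risky
                    [ 4, 4,  4],    # action 1: safe
                    [ 0, 8,  6]])   # action 2: hedged

expected = utility @ p_state        # expected utility of each action
best_action = int(np.argmax(expected))
print(expected, best_action)        # the safe action wins here
```

With interacting agents the same computation must account for the other players' strategies, which is what makes games genuinely harder than decision problems.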
Many scientific fields provide increasingly multidimensional and complex data, along with novel motivating applications and methodological questions. These require not only prediction or point estimation, but also careful uncertainty quantification, relying on flexible and interpretable statistical models with theoretical guarantees. With these motivations in mind, we aim to provide methodological, theoretical and computational advances in the statistical learning of complex data and processes, with special emphasis on Bayesian (nonparametric) approaches.
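As one concrete example of the nonparametric building blocks involved, the Dirichlet process admits a stick-breaking construction that is easy to sketch (the truncation level and concentration parameter below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def stick_breaking(alpha, n_atoms=1000):
    # Truncated stick-breaking construction of Dirichlet process weights:
    # break off a Beta(1, alpha) fraction of the remaining stick each time.
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining

weights = stick_breaking(alpha=2.0)
print(weights[:5], weights.sum())  # weights sum to (almost) 1
```

Smaller `alpha` concentrates mass on a few atoms; larger `alpha` spreads it over many, which is how the model lets the data decide its effective complexity.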
We want to apply tools and algorithms from statistical physics, decision theory and machine learning to novel problems of optimization under uncertainty, for example those in which the loss function is (at least partially) unknown and must be discovered while the landscape is explored.
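A minimal instance of such a problem is a multi-armed bandit, where each arm's loss is unknown and only revealed through noisy evaluations; here is a UCB-style sketch (arm means, noise level and horizon are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: K arms whose mean losses are unknown and must be
# discovered while the landscape is explored.
true_means = np.array([0.9, 0.6, 0.3, 0.7])   # arm 2 has the lowest loss

def ucb_explore(true_means, horizon=5000):
    K = len(true_means)
    counts = np.zeros(K)
    estimates = np.zeros(K)
    for t in range(horizon):
        if t < K:
            arm = t                            # play each arm once
        else:
            # Optimistic index for minimization: estimate minus a bonus
            # that shrinks as an arm accumulates observations (UCB1-style).
            bonus = np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmin(estimates - bonus))
        loss = true_means[arm] + 0.1 * rng.standard_normal()  # noisy evaluation
        counts[arm] += 1
        estimates[arm] += (loss - estimates[arm]) / counts[arm]
    return int(np.argmin(estimates)), counts

best, counts = ucb_explore(true_means)
print(best, counts)  # the low-loss arm accumulates most of the pulls
```

The same exploration-exploitation tension appears in the richer settings we care about, where the "arms" are points of a structured, high-dimensional landscape.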
Quantum Machine Learning
We study efficient ways to exploit the interplay between quantum fluctuations and the high-dimensional geometrical features of ML devices. The goal is to design quantum algorithms that can efficiently sample learning landscapes and rapidly access optimal solutions.
Applications are run in collaboration with other research groups or private companies, which support our post-docs. They range from computational biology to data analytics, process optimization and predictive models.