Reduced basis methods for the approximation of parameter-dependent partial differential equations are now well developed and are starting to be used in industrial applications. The classical implementation of the reduced basis method goes through two stages: in the first, offline and time-consuming stage, a reduced basis is constructed from standard approximation methods; then, in a second stage, online and...
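As a rough sketch of the offline/online split just described (an illustration, not the paper's actual method): the offline stage solves the full problem at a set of training parameters and compresses the snapshots into a small basis, while the online stage only solves the projected, low-dimensional system. The model problem A(mu) u = f, the matrices, and the parameter grid below are all made-up assumptions.

    import numpy as np

    def offline_stage(assemble_A, f, train_params, n_basis):
        # Offline (expensive): full solves at the training parameters,
        # then compress the snapshot matrix into a reduced basis via SVD.
        snapshots = np.column_stack([np.linalg.solve(assemble_A(mu), f)
                                     for mu in train_params])
        U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :n_basis]

    def online_stage(assemble_A, f, V, mu):
        # Online (cheap): project onto the reduced space, solve a small
        # n_basis x n_basis system, and lift back to the full space.
        A_r = V.T @ assemble_A(mu) @ V
        f_r = V.T @ f
        return V @ np.linalg.solve(A_r, f_r)

    # Toy parameter-dependent problem: A(mu) = K + mu * I on 200 unknowns.
    n = 200
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    f = np.ones(n)
    assemble_A = lambda mu: K + mu * np.eye(n)
    V = offline_stage(assemble_A, f, np.linspace(0.1, 10.0, 20), n_basis=5)
    u_rb = online_stage(assemble_A, f, V, mu=3.7)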
The Kullback-Leibler divergence is a widespread dissimilarity measure between probability density functions, based on the Shannon entropy. Unfortunately, no analytic formula is available to compute this divergence between mixture models, which imposes the use of costly approximation algorithms. In order to reduce the computational burden when many divergence evaluations are needed, we introduce...
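For concreteness, here is a minimal sketch of the kind of costly approximation the abstract alludes to: a plain Monte Carlo estimator of KL(f || g) between two univariate Gaussian mixtures, averaging log f(x)/g(x) over samples x drawn from f. The mixture parameters are made up for illustration.

    import numpy as np
    from scipy import stats

    def gmm_pdf(x, weights, means, stds):
        # Density of a univariate Gaussian mixture at points x.
        return sum(w * stats.norm.pdf(x, m, s)
                   for w, m, s in zip(weights, means, stds))

    def gmm_sample(n, weights, means, stds, rng):
        # Ancestral sampling: pick a component, then draw from it.
        comp = rng.choice(len(weights), size=n, p=weights)
        return rng.normal(np.asarray(means)[comp], np.asarray(stds)[comp])

    def kl_monte_carlo(f_params, g_params, n=100_000, seed=0):
        rng = np.random.default_rng(seed)
        x = gmm_sample(n, *f_params, rng)
        return np.mean(np.log(gmm_pdf(x, *f_params) / gmm_pdf(x, *g_params)))

    f = ([0.5, 0.5], [-1.0, 1.0], [0.5, 0.5])   # weights, means, stds
    g = ([0.3, 0.7], [-0.5, 1.5], [0.7, 0.4])
    print(kl_monte_carlo(f, g))   # cost grows linearly with the sample count n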
Gaussian mixture models are a widespread tool for modeling various and complex probability density functions. They can be estimated using Expectation-Maximization or Kernel Density Estimation. Expectation-Maximization leads to compact models but may be expensive to compute, whereas Kernel Density Estimation yields large models which are cheap to build. In this paper we present new methods to get...
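The trade-off described above can be seen directly with off-the-shelf estimators; the sketch below contrasts scikit-learn's EM-based GaussianMixture (compact: 2 components) with scipy's gaussian_kde (large: one kernel per sample) on synthetic data.

    import numpy as np
    from scipy.stats import gaussian_kde
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(1, 1.0, 500)])

    # EM: iterative fit, but the resulting model has only 2 components.
    em = GaussianMixture(n_components=2).fit(data.reshape(-1, 1))

    # KDE: trivial to "fit", but the model carries all 1000 kernels.
    kde = gaussian_kde(data)

    x = np.linspace(-4, 4, 9)
    print(np.exp(em.score_samples(x.reshape(-1, 1))))  # EM density estimate
    print(kde(x))                                      # KDE density estimate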
We introduce an extension of the k-MLE algorithm, a fast algorithm for learning statistical mixture models that relies on maximum likelihood estimators, which allows one to build mixtures of generalized Gaussian distributions without a fixed shape parameter. This allows us to finely model probability density functions made of highly non-Gaussian components. We theoretically prove the local convergence...
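For reference, the component family in question is the generalized Gaussian, available in scipy as gennorm; its shape parameter beta recovers the Laplacian at beta = 1 and the Gaussian at beta = 2, and the abstract's point is precisely that beta need not be fixed in advance. A small sanity check:

    import numpy as np
    from scipy.stats import gennorm, norm

    x = np.linspace(-3, 3, 7)
    for beta in (0.8, 1.0, 2.0, 8.0):
        # Smaller beta gives heavier tails; large beta flattens the density.
        print(beta, gennorm.pdf(x, beta))

    # beta = 2 coincides with a Gaussian (up to the scale convention):
    print(np.allclose(gennorm.pdf(x, 2), norm.pdf(x, scale=np.sqrt(0.5))))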
Modeling data is often a critical step in many challenging applications in computer vision, bioinformatics, or machine learning. Gaussian mixture models are a popular choice in many applications. Although these mixtures are powerful enough to approximate complex distributions, they may not be the best choice for some applications. Typical mixture-model software libraries are often limited to a particular...
The scope of the well-known k-means algorithm has been broadly extended by some recent results: first, the k-means++ initialization method comes with approximation guarantees; second, the Bregman k-means algorithm generalizes the classical algorithm to the large family of Bregman divergences. The Bregman seeding framework combines approximation guarantees with Bregman divergences. We present here...
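A hedged sketch of how such Bregman seeding can look (an illustration, not the paper's exact procedure): as in k-means++, the first center is drawn uniformly and each subsequent center is drawn with probability proportional to the divergence to its nearest existing center, here using the generalized KL divergence on positive data.

    import numpy as np

    def gen_kl(x, y):
        # Generalized KL, the Bregman divergence of sum_i x_i log x_i,
        # computed between each row of x and a single center y.
        return np.sum(x * np.log(x / y) - x + y, axis=-1)

    def bregman_plus_plus_seed(data, k, divergence=gen_kl, seed=0):
        rng = np.random.default_rng(seed)
        centers = [data[rng.integers(len(data))]]
        for _ in range(k - 1):
            # Divergence from each point to its nearest chosen center.
            d = np.min([divergence(data, c) for c in centers], axis=0)
            centers.append(data[rng.choice(len(data), p=d / d.sum())])
        return np.array(centers)

    data = np.random.default_rng(1).gamma(shape=2.0, size=(500, 4))
    print(bregman_plus_plus_seed(data, k=3))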
Bhattacharyya distance (BD) is a widely used distance in statistics to compare probability density functions (PDFs). It has strong statistical properties (in terms of Bayes error) and relates to Fisher information. It also has practical advantages, since it essentially measures the overlap of the supports of the PDFs. Unfortunately, even with common parametric models of PDFs, few...
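One of the few tractable parametric cases is the pair of univariate Gaussians, for which BD has a closed form; the sketch below checks it against direct numerical integration of the Bhattacharyya coefficient BC = integral of sqrt(p(x) q(x)) dx, with BD = -ln BC.

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    def bd_gaussian(m1, s1, m2, s2):
        # Closed-form Bhattacharyya distance between N(m1, s1^2), N(m2, s2^2).
        v1, v2 = s1**2, s2**2
        return (0.25 * (m1 - m2)**2 / (v1 + v2)
                + 0.5 * np.log((v1 + v2) / (2 * s1 * s2)))

    def bd_numeric(m1, s1, m2, s2):
        # Direct integration of the Bhattacharyya coefficient.
        bc, _ = quad(lambda x: np.sqrt(stats.norm.pdf(x, m1, s1)
                                       * stats.norm.pdf(x, m2, s2)),
                     -np.inf, np.inf)
        return -np.log(bc)

    print(bd_gaussian(0, 1, 1, 2))   # ~0.1616
    print(bd_numeric(0, 1, 1, 2))    # agrees with the closed form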