TR98-21
Pattern Discovery via Entropy Minimization
- Brand, M.E., "Pattern Discovery via Entropy Minimization," Uncertainty 99: International Workshop on Artificial Intelligence and Statistics, January 1999.
MERL Contact: Matthew Brand
Abstract:
We propose a framework for learning hidden-variable models by optimizing entropies, in which entropy minimization, posterior maximization, and free energy minimization are all equivalent. Solutions for the maximum a posteriori (MAP) estimator yield powerful learning algorithms that combine all the charms of expectation-maximization and deterministic annealing. Contained as special cases are the methods of maximum entropy, maximum likelihood, and a new method, maximum structure. We focus on the maximum structure case, in which entropy minimization maximizes the amount of evidence supporting each parameter while minimizing uncertainty in the sufficient statistics and cross-entropy between the model and the data. In iterative estimation, the MAP estimator gradually extinguishes excess parameters, sculpting a model structure that reflects hidden structures in the data. These models are highly resistant to over-fitting and have the particular virtue of being easy to interpret, often yielding insights into the hidden causes that generate the data.
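The entropic MAP estimate described in the abstract can be sketched for the simplest case, a single multinomial. Under Brand's entropic prior P(θ) ∝ e^(−H(θ)), stationarity of the log-posterior Σᵢ ωᵢ log θᵢ + Σᵢ θᵢ log θᵢ (with evidence counts ωᵢ) under the constraint Σᵢ θᵢ = 1 gives ωᵢ/θᵢ + log θᵢ + 1 + λ = 0, which is solved by the Lambert W function. The following is a minimal illustrative sketch, not the paper's implementation; the function name, the use of SciPy, and the bisection bounds for the Lagrange multiplier λ are this sketch's own choices:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import lambertw

def entropic_map(omega):
    """Sketch of entropic MAP estimation for one multinomial.

    Maximizes  sum_i omega_i log theta_i + sum_i theta_i log theta_i
    subject to sum_i theta_i = 1, where omega_i are evidence counts.
    Stationarity gives  omega_i/theta_i + log theta_i + 1 + lam = 0,
    i.e.  theta_i = -omega_i / W(-omega_i * exp(1 + lam)).
    """
    omega = np.asarray(omega, dtype=float)

    def theta_of(lam):
        z = -omega * np.exp(1.0 + lam)
        # The W_{-1} branch yields the real, probability-valued solution here.
        return (-omega / lambertw(z, k=-1)).real

    # W's argument must stay in [-1/e, 0): requires lam <= -2 - log(max omega).
    lam_hi = -2.0 - np.log(omega.max()) - 1e-9
    # Heuristic lower bound, far enough out that sum(theta) < 1 (an assumption
    # of this sketch, adequate for modest count vectors).
    lam_lo = -2.0 * omega.sum() - 10.0
    # Solve for the multiplier lam that normalizes theta.
    lam = brentq(lambda l: theta_of(l).sum() - 1.0, lam_lo, lam_hi)
    return theta_of(lam)
```

Compared with the maximum-likelihood estimate ω/Σω, the entropic MAP estimate is biased toward lower entropy: well-supported parameters grow slightly while weakly supported ones shrink toward zero, the "parameter extinction" behavior the abstract describes.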
Related News & Events
NEWS: Publication by Matthew Brand
Date: January 31, 1999
Where: Uncertainty 99: International Workshop on Artificial Intelligence and Statistics
MERL Contact: Matthew Brand
Brief: The paper "Pattern Discovery via Entropy Minimization" by Brand, M.E. was presented at Uncertainty 99: International Workshop on Artificial Intelligence and Statistics.