Minimum Divergence Methods in Statistical Machine Learning

Best Price (Coupon Required):
Buy Minimum Divergence Methods in Statistical Machine Learning for $98.10 at Link.springer.com when you apply the 10% off coupon at checkout.

1 Offer. List Price: $109.00
BEST PRICE

Single Product Purchase

$98.10 at Link.springer.com with extra coupon

Price Comparison

Seller: Link.springer.com (BEST PRICE, 1 Product Purchase)
List Price: $109.00
On Sale: $109.00
Best Promo: 10% OFF (this deal requires a coupon)
Final Price: $98.10
Shipping: see site
Availability: In stock

Product Details

Brand
Springer Nature
Manufacturer
N/A
Part Number
0
GTIN
9784431569206
Condition
New
Product Description

This book explores minimum divergence methods of statistical machine learning for estimation, regression, prediction, and related tasks, in which we engage information geometry to elucidate the intrinsic properties of the corresponding loss functions, learning algorithms, and statistical models. One of the most elementary examples is Gauss's least squares estimator in a linear regression model, in which the estimator is given by minimizing the sum of squared differences between the response vector and its projection onto the linear subspace spanned by the explanatory vectors. This is extended to Fisher's maximum likelihood estimator (MLE) for an exponential model, in which the estimator minimizes an empirical analogue of the Kullback-Leibler (KL) divergence between the data distribution and a parametric distribution of the exponential model. Thus, we envisage a geometric interpretation of such minimization procedures: a right triangle satisfying a Pythagorean identity in the sense of the KL divergence. This understanding reveals a dualistic interplay between statistical estimation and the statistical model, which requires dual geodesic paths, called the m-geodesic and e-geodesic paths, in the framework of information geometry. We extend this dualistic structure of the MLE and the exponential model to that of the minimum divergence estimator and the maximum entropy model, which is applied to robust statistics, maximum entropy, density estimation, principal component analysis, independent component analysis, regression analysis, manifold learning, boosting algorithms, clustering, dynamic treatment regimes, and so forth. We consider a variety of information divergence measures, typically including the KL divergence, to express the departure of one probability distribution from another.
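The KL-minimization view of the MLE described above can be sketched numerically. A minimal sketch, assuming a Gaussian model with known unit variance: the empirical analogue of the KL divergence equals the average negative log-likelihood up to a constant not depending on the mean, so its minimizer is the sample mean, recovering the coincidence of least squares and maximum likelihood. The function names and the crude grid-search minimizer are illustrative, not from the book.

```python
import math

def empirical_kl_analogue(mu, data, sigma=1.0):
    # Empirical analogue of KL(data distribution || N(mu, sigma^2)):
    # up to a mu-independent constant, the average negative log-likelihood.
    return sum((x - mu) ** 2 / (2 * sigma ** 2)
               + 0.5 * math.log(2 * math.pi * sigma ** 2)
               for x in data) / len(data)

def mle_mean(data, lo=-10.0, hi=10.0, steps=10_000):
    # Minimize the empirical KL analogue over mu by a simple grid search.
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(grid, key=lambda mu: empirical_kl_analogue(mu, data))

data = [1.2, 0.8, 1.5, 0.9, 1.1]
mu_hat = mle_mean(data)
sample_mean = sum(data) / len(data)
# The minimizer agrees with the sample mean (least squares = MLE here).
print(round(mu_hat, 3), round(sample_mean, 3))
```

Minimizing the empirical KL divergence over a full exponential family reproduces the MLE in general; the Gaussian-mean case is just the simplest instance.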
An information divergence is decomposed into a cross-entropy and a (diagonal) entropy, in which the entropy is associated with a generative model as a family of maximum entropy distributions, while the cross-entropy is associated with a statistical estimation method via minimization of its empirical analogue on given data. Thus any statistical divergence intrinsically pairs a generative model with an estimation method. Typically, the KL divergence leads to the exponential model and maximum likelihood estimation. It is shown that any information divergence induces a Riemannian metric and a pair of linear connections in the framework of information geometry. We focus on a class of information divergences generated by an increasing, convex function U, called the U-divergence. Any generator function U yields a U-entropy and a U-divergence, with a dualistic structure between the minimum U-divergence method and the maximum U-entropy model. We observe that a specific choice of U leads to a robust statistical procedure via the minimum U-divergence method. If U is selected as the exponential function, then the corresponding U-entropy and U-divergence reduce to the Boltzmann-Shannon entropy and the KL divergence, and the minimum U-divergence estimator is equivalent to the MLE. For robust supervised learning to predict a class label, we observe that the U-boosting algorithm performs well under contamination by mislabeled examples if U is appropriately selected. We present such maximum U-entropy and minimum U-divergence methods, in particular selecting a power function as U to provide flexible performance in statistical machine learning.
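The robustness obtained from a power-type U can be illustrated with the density power (beta-) divergence. A minimal sketch, assuming a Gaussian location model with known scale and using the standard fixed-point iteration for the minimum density power divergence estimator, in which each observation is weighted by its model density raised to the power beta; the function names are illustrative, not from the book.

```python
import math

def beta_divergence_mean(data, beta=0.5, sigma=1.0, iters=100):
    # Minimum density power (beta-) divergence estimate of a Gaussian
    # location parameter. The estimating equation yields a fixed-point
    # iteration: each point gets weight exp(-beta (x - mu)^2 / (2 sigma^2)),
    # so gross outliers receive exponentially small weight.
    mu = sum(data) / len(data)  # start from the non-robust sample mean (MLE)
    for _ in range(iters):
        w = [math.exp(-beta * (x - mu) ** 2 / (2 * sigma ** 2)) for x in data]
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu

clean = [0.9, 1.1, 1.0, 1.2, 0.8]
contaminated = clean + [50.0]  # one gross outlier
print(round(beta_divergence_mean(contaminated), 2))     # stays near 1.0
print(round(sum(contaminated) / len(contaminated), 2))  # sample mean is dragged toward the outlier
```

As beta tends to 0 the weights tend to 1 and the iteration returns the sample mean, consistent with the statement above that the KL/MLE case sits at one end of this family.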


Reviews

0 reviews

Similar Products

Organ-Selective Actions of Steroid Hormones ($54.99)
Hume and Husserl ($109.99)
Photochemical Conversion and Storage of Solar Energy ($219.99)
Die Verwaltung der öffentlichen Arbeiten in Preußen 1900 bis 1910 ($59.99)
Maximum Entropy and Bayesian Methods ($329.99)
Der zerebrale Gefäßprozeß in der ärztlichen Sprechstunde ($49.99)
Kompendium der Statik der Baukonstruktionen ($59.99)
Political Economy of Development in Turkey ($169.99)
Make Capitalism History ($59.99)
Scattering Theory of Waves and Particles ($199.99)
BioNanotechnology ($37.99)
Female Entrepreneurship in Transition Economies ($84.99)
A Practical Guide to Ore Microscopy, Volume 2 ($129.00)
Empathy in Contemporary Poetry after Crisis ($64.99)
Simply Seven ($54.99)
Handbuch Europarecht ($179.00)
Pancreatoduodenectomy ($39.99)
Slope Stochastic Dynamics ($159.99)
Generation 50 plus ($34.99)
Centenary of the Famous 41 ($84.99)
Mediating Sovereign Debt Disputes ($159.99)
Squished: A Graphic Novel by Megan Wagner Lloyd ($12.99)
Advanced Functional Programming ($54.99)
Der Niederfrequenz-Verstärker ($44.99)
Rural Development Planning in Africa ($69.99)
Die Wissenschaft des Subjekts ($64.99)
Entrepreneurship, Technological Change and Circular Economy for a Green Transition ($169.99)
Visuelle Führung ($17.99)
Wege agiler Führung mit Sinn ($17.99)
An Introduction to Computational Micromechanics ($139.99)
Aspects of Homogeneous Catalysis ($39.99)
Aggregation Functions in Theory and in Practice ($169.99)
North American Perspectives on the Development of Public Relations ($54.99)
Conversation Analysis and a Cultural-Historical Approach ($139.99)
Agile Software Development Teams ($39.99)
Basics of Oncology ($54.99)
Entstehung von Unternehmenskrisen ($79.99)
Proceedings of the 3rd International Colloquium on Sports Science, Exercise, Engineering and Technol ($109.99)
Biomonitors and Biomarkers as Indicators of Environmental Change 2 ($129.00)
Applied Mathematics for Restructured Electric Power Systems ($169.99)